An Evolutionary Approach to Drug-Design Using Quantum Binary Particle Swarm Optimization Algorithm
The present work provides a new approach to evolving ligand structures that
represent candidate drugs to be docked to the active site of the target protein.
The structure is represented as a tree in which each non-empty node represents a
functional group. It is assumed that the active site configuration of the
target protein is known, including the positions of the essential residues. In this paper
the interaction energy of the ligands with the protein target is minimized.
Moreover, an appropriate tree size is difficult to determine in advance and will
differ across active sites. To overcome this difficulty, a variable tree-size
configuration is used for designing ligands. The optimization is carried out using a
quantum discrete PSO. The results using the fixed-length and variable-length
configurations are compared.
Comment: 4 pages, 6 figures (Published in IEEE SCEECS 2012). arXiv admin note:
substantial text overlap with arXiv:1205.641
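The abstract's quantum discrete PSO can be illustrated with a minimal sketch. This is not the authors' implementation: the particle count, the learning rate `beta`, and the toy fitness function are all illustrative assumptions. The core idea shown is that each particle carries a probability vector (its "quantum" state) from which binary positions are sampled, and the probabilities are nudged toward the personal and global best solutions.

```python
import random

def qbpso(fitness, n_bits, n_particles=20, n_iters=100, beta=0.1, seed=0):
    """Sketch of a quantum binary PSO: each particle stores one
    probability per bit; bits are sampled from those probabilities,
    which then drift toward the best-known bit patterns."""
    rng = random.Random(seed)
    # every bit starts in an "equal superposition" (probability 0.5)
    probs = [[0.5] * n_bits for _ in range(n_particles)]
    pbest = [None] * n_particles
    pbest_fit = [float("inf")] * n_particles
    gbest, gbest_fit = None, float("inf")
    for _ in range(n_iters):
        for i in range(n_particles):
            # "measure" the particle: collapse probabilities to bits
            bits = [1 if rng.random() < p else 0 for p in probs[i]]
            f = fitness(bits)
            if f < pbest_fit[i]:
                pbest_fit[i], pbest[i] = f, bits
            if f < gbest_fit:
                gbest_fit, gbest = f, bits
            # nudge each bit probability toward the best-known values
            for j in range(n_bits):
                target = 0.5 * (pbest[i][j] + gbest[j])
                probs[i][j] += beta * (target - probs[i][j])
    return gbest, gbest_fit

# toy stand-in for an interaction-energy objective: minimized at all-ones
best, energy = qbpso(lambda b: -sum(b), n_bits=8)
```

In the paper the fitness would be the ligand-protein interaction energy and the bit string would encode the tree of functional groups; here a trivial objective stands in for both.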
Misspecified Linear Bandits
We consider the problem of online learning in misspecified linear stochastic
multi-armed bandit problems. Regret guarantees for state-of-the-art linear
bandit algorithms such as Optimism in the Face of Uncertainty Linear bandit
(OFUL) hold under the assumption that the arms' expected rewards are perfectly
linear in their features. It is, however, of interest to investigate the impact
of potential misspecification in linear bandit models, where the expected
rewards are perturbed away from the linear subspace determined by the arms'
features. Although OFUL has recently been shown to be robust to relatively
small deviations from linearity, we show that any linear bandit algorithm that
enjoys optimal regret performance in the perfectly linear setting (e.g., OFUL)
must suffer linear regret under a sparse additive perturbation of the linear
model. In an attempt to overcome this negative result, we define a natural
class of bandit models characterized by a non-sparse deviation from linearity.
We argue that the OFUL algorithm can fail to achieve sublinear regret even
under models that have a non-sparse deviation. We finally develop a novel bandit
algorithm, comprising a hypothesis test for linearity followed by a decision to
use either the OFUL or the Upper Confidence Bound (UCB) algorithm. For perfectly
linear bandit models, the algorithm provably exhibits OFUL's favorable regret
performance, while for misspecified models satisfying the non-sparse deviation
property, the algorithm avoids the linear regret phenomenon and falls back on
UCB's sublinear regret scaling. Numerical experiments on synthetic data, and on
recommendation data from the public Yahoo! Learning to Rank Challenge dataset,
empirically support our findings.
Comment: Thirty-First AAAI Conference on Artificial Intelligence, 2017
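The two-stage idea in this abstract (test for linearity, then commit to a linear-bandit or plain UCB strategy) can be sketched in a few lines. This is a hedged illustration, not the paper's actual test statistic: the function name, the least-squares fit, and the residual threshold `tol` are all illustrative assumptions.

```python
import numpy as np

def choose_algorithm(X, rewards, tol=0.1):
    """Illustrative linearity check: fit per-arm mean rewards on the
    arm features by least squares, then compare the worst-case residual
    against a tolerance to decide which bandit strategy to run."""
    theta, *_ = np.linalg.lstsq(X, rewards, rcond=None)
    residual = np.max(np.abs(X @ theta - rewards))
    return ("linear (OFUL-style)" if residual <= tol else "UCB"), theta

# arms whose mean rewards are exactly linear in the features
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_theta = np.array([0.3, 0.5])
alg, _ = choose_algorithm(X, X @ true_theta)

# a non-sparse deviation pushes every arm off the linear subspace
perturbed = X @ true_theta + np.array([0.4, -0.4, 0.4])
alg2, _ = choose_algorithm(X, perturbed)
```

The sketch captures the abstract's contrast: when every arm's deviation is large (non-sparse), a residual test can detect the misspecification and fall back on UCB, whereas a single sparse perturbation of one arm may be much harder to distinguish from noise.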